642 research outputs found

    Using camera motion to identify different types of American football plays

    This paper presents a method that uses camera motion parameters to recognise seven types of American football play. The approach is based on motion information extracted from the video and can identify short and long pass plays, short and long running plays, quarterback sacks, punt plays and kickoff plays. Because it requires no player or ball tracking, the method is fast. The system was trained and tested on 782 plays and achieved an overall classification accuracy of 68%.
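    The abstract does not specify the classifier or the exact camera motion features, so the sketch below is only illustrative: it assumes each play arrives as a per-frame sequence of (pan, tilt, zoom) camera motion parameters, summarises it into a fixed-length feature vector, and labels it with a simple nearest-centroid rule over the seven play types named above. The helpers `motion_features`, `fit_centroids` and `classify` are hypothetical, not the authors' method.

```python
# Illustrative only: the paper's features and classifier are not given here.
import numpy as np

PLAY_TYPES = ["short pass", "long pass", "short run", "long run",
              "quarterback sack", "punt", "kickoff"]  # the 7 classes named in the abstract

def motion_features(play):
    """Summarise a (T, 3) per-frame camera motion sequence (pan, tilt, zoom)
    into a fixed-length vector of means, spreads and peak magnitudes."""
    play = np.asarray(play, dtype=float)
    return np.concatenate([play.mean(axis=0), play.std(axis=0),
                           np.abs(play).max(axis=0)])

def fit_centroids(plays, labels):
    """Mean feature vector per play type (stand-in for the training step)."""
    feats = np.array([motion_features(p) for p in plays])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in set(labels)}

def classify(play, centroids):
    """Assign the play type whose centroid is closest in feature space."""
    f = motion_features(play)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```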

    Permutation Models for Collaborative Ranking

    We study the problem of collaborative filtering where ranking information is available. Focusing on the core of the collaborative ranking process, the user and their community, we propose new models for representing the underlying permutations and predicting ranks. The first approach is based on the assumption that the user makes successive choices of items in a stage-wise manner. In particular, we extend the Plackett-Luce model in two ways: introducing parameter factoring to account for user-specific contributions, and modelling the latent community in a generative setting. The second approach relies on a log-linear parameterisation, which relaxes the discrete-choice assumption but makes learning and inference more involved. We propose MCMC-based learning and inference methods and derive linear-time prediction algorithms.
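    As a point of reference, here is a minimal sketch of the base Plackett-Luce model that the abstract says is extended: the user is assumed to choose items stage-wise, at each stage picking among the remaining items with probability proportional to exp(score). The factored user-specific parameters, the latent community model and the MCMC machinery are not reproduced here.

```python
# Base Plackett-Luce likelihood only; the paper's extensions are not shown.
import numpy as np

def plackett_luce_log_likelihood(ranking, scores):
    """Log-probability of `ranking` (item indices, best first) given one
    real-valued utility per item in `scores`.  At each stage the next item
    is drawn from the remaining items with probability
    exp(score) / sum of exp(score) over the remaining items."""
    scores = np.asarray(scores, dtype=float)
    remaining = list(ranking)
    ll = 0.0
    for item in ranking:
        logits = scores[remaining]
        m = logits.max()
        ll += scores[item] - (m + np.log(np.exp(logits - m).sum()))  # log-softmax term
        remaining.remove(item)
    return ll

# Example: four items, a user who ranks item 2 first, then 0, 3, 1.
print(plackett_luce_log_likelihood([2, 0, 3, 1], [0.5, -1.0, 2.0, 0.1]))
```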

    Detection of setting and subject information in documentary video

    Interpretation of video information is a difficult task for computer vision and machine intelligence. In this paper we examine the utility of a non-image source of information about video content, namely the shot list, and study its use in aiding image interpretation. We show how the shot list may be analysed to produce a simple summary of the 'who and where' of a documentary or interview video. To detect the subject of a video, we use the notion of a 'shot syntax' of a particular genre to isolate the actual interview sections.
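    The abstract does not give the shot list format or the exact 'shot syntax' rule, so the following is a hypothetical sketch: each shot record is assumed to carry a type label, and consecutive interview-style shots (tolerating short cutaway inserts) are merged into candidate interview sections.

```python
# Hypothetical shot list: (start_frame, end_frame, shot_type) per shot.
shot_list = [
    (0, 120, "establishing"),
    (121, 400, "interview"),
    (401, 450, "cutaway"),
    (451, 900, "interview"),
    (901, 1000, "exterior"),
]

def interview_sections(shots, max_gap_shots=1):
    """Merge runs of interview shots, allowing up to `max_gap_shots`
    non-interview inserts, into (start_frame, end_frame) sections."""
    sections, current, gap = [], None, 0
    for start, end, kind in shots:
        if kind == "interview":
            current = (current[0], end) if current else (start, end)
            gap = 0
        elif current:
            gap += 1
            if gap > max_gap_shots:
                sections.append(current)
                current, gap = None, 0
    if current:
        sections.append(current)
    return sections

print(interview_sections(shot_list))  # -> [(121, 900)]
```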

    Towards automatic extraction of expressive elements from motion pictures : tempo

    This paper proposes a unique computational approach to extracting expressive elements of motion pictures in order to derive high-level semantics of the stories they portray, enabling better video annotation and interpretation systems. The approach is motivated and directed by existing cinematic conventions known as film grammar. As a first step towards demonstrating its effectiveness, it uses the attributes of motion and shot length to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for four full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful attribute in its own right and as a promising component of higher-level semantic constructs such as the tone or mood of a film.
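    The paper's exact tempo definition is not reproduced here; the sketch below shows one plausible reading of the abstract, in which per-shot motion magnitude and (inverse) shot length are normalised, combined and smoothed into a tempo flow, and sharp changes in that flow mark candidate dramatic sections. The weights `alpha`, `beta`, the smoothing window and the edge threshold are all assumptions.

```python
# Illustrative tempo-flow sketch; not the paper's published formulation.
import numpy as np

def tempo_flow(shot_lengths, shot_motion, alpha=1.0, beta=1.0, window=5):
    """Per-shot tempo: shorter shots and higher motion both raise tempo.
    Both cues are z-normalised, combined, then smoothed with a moving
    average over `window` shots."""
    sl = np.asarray(shot_lengths, dtype=float)
    mo = np.asarray(shot_motion, dtype=float)
    z = lambda x: (x - x.mean()) / (x.std() + 1e-9)
    raw = alpha * z(-sl) + beta * z(mo)      # short shot length -> higher tempo
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")

def tempo_edges(tempo, threshold=0.5):
    """Shot indices where the tempo flow changes sharply (candidate events)."""
    return np.nonzero(np.abs(np.diff(tempo)) > threshold)[0]
```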